Dissertation by Martin Galanda
Abstract
Polygonal subdivisions, i.e. the representation of categorical data in the vector model, are a common data type in GIS applications, thematic maps, topographic maps and digital landscape models. Their cartographic generalization is termed polygon generalization. Cartographic generalization, or simply generalization, is one of the basic principles of cartography, namely the legible and comprehensible visualization of geographic data at a certain scale and for a given purpose. For the generalization of polygonal subdivisions, a multitude of methods, such as generalization algorithms, measures and generalization constraints, have been proposed in the past, but in isolation from each other. What is missing is a comprehensive framework for the orchestration of these tools, that is, their integration into an automated, comprehensive generalization process. Hence, this thesis studies the automation of polygon generalization by means of a multi-agent system (MAS). In doing so, the work extends previous research carried out by the AGENT consortium (a consortium of the Institut Géographique National France, the University of Edinburgh, the University of Zurich, the Institut National Polytechnique de Grenoble and Laser-Scan Ltd.). A discussion of possible approaches to automated generalization leads to the proposal of a framework for comprehensive, automated and agent-based polygon generalization. MAS technology, together with the concepts developed by the AGENT consortium, offers distinct benefits for the automation of map generalization. These benefits include the capabilities 1) to compromise between different generalization constraints associated with a cartographic object, 2) to coordinate the generalization of objects at different spatial levels and 3) to model holistic decision making in map generalization. After listing the generic properties of agents, the spatial levels of polygon generalization are identified, namely map, group, polygon and line.
Each of these levels is linked to a specific agent type. Both the process of polygon generalization based on a multi-agent system and the evolution of an agent during the generalization process are discussed theoretically. Next, a worked example clarifies and illustrates the concepts and methods embedded in the proposed framework. Prior to the implementation of the framework, generalization algorithms that make use of energy minimization techniques and generalization constraints for polygon generalization are studied; algorithms are essential to conflict resolution, while constraints control the agent-based generalization process. The application of an energy minimization technique, snakes, is investigated for resolving size conflicts (i.e. a polygon is too small with respect to the target scale) and proximity conflicts (i.e. polygons are too close to each other) in polygonal subdivisions. A single snakes-based algorithm is proposed, which can be controlled in such a way that it achieves the displacement, enlargement and exaggeration of polygons, or an arbitrary combination of these operations. Thus, size and proximity conflicts within a group of polygons can be solved simultaneously, that is, a holistic solution of such conflicts is accomplished. Moreover, the proposed algorithm enables the direct integration of the update of the neighbors of a modified polygon into the transformation process. The main drawbacks identified are the difficult setup and fine-tuning of the snakes parameters and the computational resources required by the algorithm. However, the experiments emphasize that the algorithm constitutes an improvement over existing algorithms for resolving size and proximity conflicts, as well as over sequential approaches to propagation.
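The constraint-driven agent lifecycle summarized above — an agent evaluates the constraints attached to it, tries the generalization operations those constraints propose, and keeps the best resulting state — can be illustrated as a greedy hill-climbing loop. The following is a minimal sketch only; the names (`Constraint`, `generalize`) and the minimal-size example are illustrative assumptions, not the thesis's actual implementation.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Constraint:
    name: str
    satisfaction: Callable[[float], float]   # maps a state value to [0, 1]
    plans: List[Callable[[float], float]]    # candidate generalization operations

def happiness(value: float, constraints: List[Constraint]) -> float:
    """Mean satisfaction over all constraints: the agent's 'happiness'."""
    return sum(c.satisfaction(value) for c in constraints) / len(constraints)

def generalize(value: float, constraints: List[Constraint], max_steps: int = 10) -> float:
    """Greedy agent lifecycle: repeatedly apply the plan that most
    improves happiness; stop when no plan improves the current state."""
    best = value
    for _ in range(max_steps):
        candidates = [plan(best) for c in constraints for plan in c.plans]
        improved = max(candidates, key=lambda v: happiness(v, constraints))
        if happiness(improved, constraints) <= happiness(best, constraints):
            break
        best = improved
    return best

# Hypothetical example: a minimal-size constraint on polygon area,
# with a single plan that enlarges the polygon by 20 %.
min_area = 1.0
size = Constraint("minimal size",
                  satisfaction=lambda a: min(a / min_area, 1.0),
                  plans=[lambda a: a * 1.2])
```

In the full framework an agent at one spatial level (map, group, polygon, line) would delegate to agents below it rather than act on a single scalar, but the evaluate-try-keep-best cycle is the same.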
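The snakes technique mentioned above casts outline deformation as energy minimization: an internal energy (tension α, rigidity β) keeps the contour smooth, while an external force drives the change — for enlargement, an outward "balloon" pressure. Below is a minimal NumPy sketch of the standard discrete snake update; the parameter values and the `enlarge` helper are illustrative assumptions, not the algorithm developed in the thesis.

```python
import numpy as np

def snake_step(pts, alpha=0.1, beta=0.01, gamma=1.0, force=None):
    """One implicit-Euler iteration of a closed snake.
    pts:   (n, 2) array of polygon vertices (closed contour)
    force: (n, 2) external force, e.g. balloon pressure"""
    n = len(pts)
    # Circulant stiffness matrix from the internal energy:
    # tension (alpha) penalizes stretching, rigidity (beta) bending.
    A = np.zeros((n, n))
    for i in range(n):
        A[i, (i - 2) % n] += beta
        A[i, (i - 1) % n] += -alpha - 4 * beta
        A[i, i]           += 2 * alpha + 6 * beta
        A[i, (i + 1) % n] += -alpha - 4 * beta
        A[i, (i + 2) % n] += beta
    f = np.zeros_like(pts) if force is None else force
    # Implicit step: (I + gamma * A) x_new = x + gamma * f
    return np.linalg.solve(np.eye(n) + gamma * A, pts + gamma * f)

def enlarge(pts, steps=20, pressure=0.05, **kw):
    """Enlarge a polygon by applying an outward 'balloon' force
    (unit direction from the centroid, a crude outward normal)."""
    for _ in range(steps):
        out = pts - pts.mean(axis=0)
        out /= np.linalg.norm(out, axis=1, keepdims=True)
        pts = snake_step(pts, force=pressure * out, **kw)
    return pts
```

Displacement and exaggeration fit the same update with different external forces, which is what allows one snakes-based algorithm to combine the operations; in practice α, β, γ and the pressure need careful tuning — exactly the drawback the abstract notes.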